18 research outputs found

    The History and Prehistory of Natural-Language Semantics

    Contemporary natural-language semantics began with the assumption that the meaning of a sentence could be modeled by a single truth condition, or by an entity with a truth condition. But with the recent explosion of dynamic semantics and pragmatics and of work on non-truth-conditional dimensions of linguistic meaning, we are now in the midst of a shift away from a truth-condition-centric view and toward the idea that a sentence’s meaning must be spelled out in terms of its various roles in conversation. This communicative turn in semantics raises historical questions: Why was truth-conditional semantics dominant in the first place, and why were the phenomena now driving the communicative turn initially ignored or misunderstood by truth-conditional semanticists? I offer a historical answer to both questions. The history of natural-language semantics—springing from the work of Donald Davidson and Richard Montague—began with a methodological toolkit that Frege, Tarski, Carnap, and others had created to better understand artificial languages. For them, the study of linguistic meaning was subservient to other explanatory goals in logic, philosophy, and the foundations of mathematics, and this subservience was reflected in the fact that they idealized away from all aspects of meaning that get in the way of a one-to-one correspondence between sentences and truth-conditions. The truth-conditional beginnings of natural-language semantics are best explained by the fact that, upon turning their attention to the empirical study of natural language, Davidson and Montague adopted the methodological toolkit assembled by Frege, Tarski, and Carnap and, along with it, their idealization away from non-truth-conditional semantic phenomena. But this pivot in explanatory priorities toward natural language itself rendered the adoption of the truth-conditional idealization inappropriate. Lifting the truth-conditional idealization has forced semanticists to upend the conception of linguistic meaning that was originally embodied in their methodology.

    Primordial Nucleosynthesis for the New Cosmology: Determining Uncertainties and Examining Concordance

    Big bang nucleosynthesis (BBN) and the cosmic microwave background (CMB) have a long history together in the standard cosmology. The general concordance between the predicted and observed light element abundances provides a direct probe of the universal baryon density. Recent CMB anisotropy measurements, particularly the observations performed by the WMAP satellite, examine this concordance by independently measuring the cosmic baryon density. Key to this test of concordance is a quantitative understanding of the uncertainties in the BBN light element abundance predictions. These uncertainties are dominated by systematic errors in nuclear cross sections. We critically analyze the cross section data, producing representations that describe the data and their uncertainties, taking into account the correlations among data points, and explicitly treating the systematic errors between data sets. Using these updated nuclear inputs, we compute the new BBN abundance predictions and quantitatively examine their concordance with observations. Depending on which deuterium observations are adopted, one obtains the following constraints on the baryon density: $\Omega_B h^2 = 0.0229 \pm 0.0013$ or $\Omega_B h^2 = 0.0216^{+0.0020}_{-0.0021}$ at 68% confidence, fixing $N_{\nu,\mathrm{eff}} = 3.0$. Concerns over systematics in the helium and lithium observations limit the confidence of the constraints these data provide. With new nuclear cross section data, light element abundance observations, and the ever-increasing resolution of the CMB anisotropy, tighter constraints can be placed on nuclear and particle astrophysics. (ABRIDGED)
    Comment: 54 pages, 20 figures, 5 tables. v2: reflects PRD version; minor changes to text and references.
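    The error treatment described in the abstract (statistical errors plus correlated, per-data-set systematic errors) can be illustrated with a toy calculation: a generalized least-squares combination of two mock data sets, where each set carries uncorrelated statistical errors and a fully correlated normalization uncertainty. This is only a minimal sketch of the general technique; the arrays, normalization fractions, and constant-value model below are invented for illustration and are not taken from the paper's actual cross-section fits.

```python
import numpy as np

# Two mock data sets measuring the same quantity (arbitrary units).
y    = np.array([1.02, 0.98, 1.01, 0.95, 0.93, 0.96])
stat = np.array([0.02, 0.02, 0.02, 0.03, 0.03, 0.03])  # statistical errors
sets = np.array([0, 0, 0, 1, 1, 1])                     # data-set labels
norm_sys = {0: 0.015, 1: 0.04}                          # fractional normalization errors

# Covariance matrix: diagonal statistical part, plus a block for each data
# set that fully correlates its points through the shared normalization error.
cov = np.diag(stat**2)
for s, f in norm_sys.items():
    v = (sets == s) * f * y          # f * y_i inside set s, zero elsewhere
    cov += np.outer(v, v)            # adds f^2 * y_i * y_j within the set

# Generalized least squares for a constant model mu:
# mu_hat = (1^T C^{-1} y) / (1^T C^{-1} 1),  var(mu_hat) = 1 / (1^T C^{-1} 1)
ones = np.ones_like(y)
w = np.linalg.solve(cov, ones)       # w = C^{-1} 1
mu_hat   = (w @ y) / (w @ ones)
sigma_mu = np.sqrt(1.0 / (w @ ones))
print(f"combined value = {mu_hat:.4f} +/- {sigma_mu:.4f}")
```

    The point of the sketch is that a shared normalization uncertainty enters as an off-diagonal block of the covariance matrix, so a data set with a large systematic error is automatically down-weighted relative to its purely statistical precision when the sets are combined.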